Tracking PR for v0.21.0 release #740
Merged
Conversation
…743) refactor: use standard library and P3 utilities for slice operations

Replace custom unsafe implementations with idiomatic alternatives (see the sketch below):
- Use slice::as_chunks() instead of manual pointer arithmetic in group_slice_elements
- Use p3-util::as_base_slice() for flatten_slice_elements
- Use p3-util::flatten_to_base() for flatten_vector_elements
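As a rough illustration of the first bullet, here is a minimal sketch of grouping a flat slice into fixed-size arrays with the standard library's slice::as_chunks() (stable since Rust 1.88) instead of manual pointer arithmetic. The `group` helper and its panic-on-remainder behavior are illustrative assumptions, not the actual group_slice_elements code from this PR.

```rust
// Sketch only: a stand-in for the kind of code group_slice_elements replaces,
// not the actual miden-crypto implementation.
fn group<T, const N: usize>(values: &[T]) -> &[[T; N]] {
    // as_chunks() splits the slice into complete N-sized arrays plus a
    // remainder, with no unsafe pointer arithmetic required.
    let (chunks, remainder) = values.as_chunks::<N>();
    assert!(remainder.is_empty(), "slice length must be a multiple of N");
    chunks
}

fn main() {
    let flat = [1u64, 2, 3, 4, 5, 6];
    let grouped: &[[u64; 3]] = group(&flat);
    assert_eq!(grouped, &[[1, 2, 3], [4, 5, 6]]);
}
```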
* refactor: remove winter-compat and p3-compat feature modules

The winter-compat and p3-compat modules in miden-serde-utils were temporary compatibility layers added during the Winterfell to Plonky3 migration. Now that the migration is complete (see miden-vm PR #2472 and miden-base PR #2213), these modules are no longer needed.

Changes:
- Removed winter-compat and p3-compat feature flags from miden-serde-utils
- Deleted miden-serde-utils/src/winter_compat.rs
- Deleted miden-serde-utils/src/p3_compat.rs
- Added Goldilocks serialization implementations directly in miden-serde-utils
- Removed compat feature dependencies from workspace and miden-crypto Cargo.toml
- Cleaned up cargo-machete configuration

The Serializable/Deserializable implementations for Goldilocks are now always available in miden-serde-utils without requiring a feature flag, simplifying the dependency structure.

* chore: Changelog
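A minimal round-trip sketch of what "always available" means in practice, assuming miden-serde-utils keeps a winter-utils-style Serializable::to_bytes() / Deserializable::read_from_bytes() API; the crate path and method names here are assumptions, not verified against this PR.

```rust
// Assumed API shape: Serializable/Deserializable in the winter-utils style,
// with the Goldilocks impls available without enabling any feature flag.
use miden_serde_utils::{Deserializable, Serializable};
use p3_goldilocks::Goldilocks;

fn roundtrip(x: Goldilocks) -> Goldilocks {
    let bytes = x.to_bytes();
    Goldilocks::read_from_bytes(&bytes).expect("valid encoding")
}
```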
* feat: activate upstream p3 parallel features when concurrent is enabled
  - Add p3-maybe-rayon as a direct dependency (no_std compatible)
  - Enable p3-maybe-rayon/parallel when the concurrent feature is active
  - Enable p3-miden-prover/parallel for STARK prover parallelism
  - Enable p3-util/parallel for utility parallelism
* refactor: use p3-maybe-rayon instead of custom iterator imports. Replace custom iterators module imports with p3_maybe_rayon::prelude, which provides the same IntoParallelRefMutIterator trait (see the sketch below).
* refactor: remove custom iterator macros in favor of p3-maybe-rayon. Delete the iter!, iter_mut!, and batch_iter_mut! macros; p3-maybe-rayon provides the same conditional parallel/serial iteration via IntoParallelIterator and related traits.
* refactor(smt): use p3-maybe-rayon instead of rayon directly. Replace all direct rayon::prelude imports with p3_maybe_rayon::prelude in SMT concurrent code. This provides the same parallel iteration traits while ensuring consistency with upstream Plonky3 crates.
* chore: Changelog
* fix: typo in changelog (hoegrown -> homegrown)
* feat: re-export p3-maybe-rayon prelude as parallel module
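The sketch below shows what a call site looks like after the switch to p3-maybe-rayon; the function and data are illustrative, not code from this PR. With the `parallel` feature the closure runs on the rayon pool, and without it par_iter_mut() degrades to a plain serial iterator, so the same source builds in no_std configurations.

```rust
use p3_maybe_rayon::prelude::*;

// Illustrative only: squares every element in place via the maybe-parallel
// iterator traits re-exported by p3_maybe_rayon::prelude.
fn square_in_place(values: &mut [u64]) {
    values.par_iter_mut().for_each(|v| *v = v.wrapping_mul(*v));
}

fn main() {
    let mut data = vec![1u64, 2, 3, 4];
    square_in_place(&mut data);
    assert_eq!(data, vec![1, 4, 9, 16]);
}
```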
* refactor: reduce test dependencies on std feature

Replace `rand::rng()` with seeded `ChaCha20Rng` throughout test code to enable tests to run in no_std environments. This addresses issue #726.

Changes:
- Replace `rand::rng()` calls with `seeded_rng()` or `ChaCha20Rng::from_seed()`
- Change `#[cfg(all(test, feature = "std"))]` gates to `#[cfg(test)]`
- Replace `std::collections::HashSet` with `alloc::collections::BTreeSet`
- Add `testing` feature that enables proptest for downstream crates
- Move `Arbitrary` impl for `Word` to word/mod.rs (gated by test/testing)
- Extract common `rng_value` helper in test_utils.rs to reduce duplication

Test coverage increased from 167 to 255 tests in no_std mode.

* chore: Changelog
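A minimal sketch of the seeding pattern described above, assuming the rand 0.9 / rand_chacha 0.9 APIs implied by the `rand::rng()` call it replaces. Only the `seeded_rng` name comes from the commit message; the seed value and helper body are assumptions.

```rust
use rand::{Rng, SeedableRng};
use rand_chacha::ChaCha20Rng;

// Assumed shape of the helper: a fixed seed means no OS entropy source (and
// therefore no std) is needed, and test inputs are reproducible.
fn seeded_rng() -> ChaCha20Rng {
    ChaCha20Rng::from_seed([7u8; 32])
}

fn main() {
    let a: u64 = seeded_rng().random();
    let b: u64 = seeded_rng().random();
    // Same seed, same sequence.
    assert_eq!(a, b);
}
```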
Migrate from the Miden-specific `p3-miden-goldilocks` crate to the upstream `p3-goldilocks` from Plonky3. This aligns with the upstream ecosystem and reduces fork maintenance.

Changes:
- Update Cargo.toml deps to use the Plonky3 main branch for p3-* crates
- Replace all `p3_miden_goldilocks` imports with `p3_goldilocks`
- Add `p3-util` dependency for slice utilities
- Update `as_int()` calls to `as_canonical_u64()` (P3 API)

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <[email protected]>
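A minimal sketch of the API change in the last bullet: reading the canonical integer out of a field element now goes through the p3-field PrimeField64 trait. The helper function is illustrative, not code from this PR.

```rust
use p3_field::PrimeField64;
use p3_goldilocks::Goldilocks;

// Call sites that previously used as_int() now use as_canonical_u64(),
// provided by the PrimeField64 trait in p3-field.
fn to_u64(x: Goldilocks) -> u64 {
    x.as_canonical_u64()
}
```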
Update p3-miden crates from commit 9553dc68 to 134c14d3.

This fixes the cargo-deny CI failure caused by duplicate p3-* crate entries (v0.4.1 from git vs v0.4.2 from crates.io). The newer p3-miden commits properly depend on p3-* v0.4.2 from crates.io instead of v0.4.1 from the Plonky3 git repository.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <[email protected]>
Add allow-org exception for the 0xMiden GitHub organization to permit p3-miden crates which are not yet published to crates.io.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <[email protected]>
This commit primarily exists to sketch out the interface for the new forest implementation, aimed at getting agreement on the interface and basic architecture before implementation begins in earnest. It does this directly on the `LargeSmtForest` type, which will be the user-facing portion of the API and is likely to later be renamed to simply `SmtForest`.

As part of doing this, it includes a number of important utility types, including:

- `Backend`, which defines the minimal set of features required for the SMT forest's pluggable backends. Backends are aware of tree semantics.
- `Operation`, contained in `SmtUpdateBatch` and `SmtForestUpdateBatch`, which define sets of modifications to be made to individual trees in the forest and to the entire forest, respectively.
- An assortment of error types to ensure strongly-typed error handling throughout the SMT forest API.
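To make the shape of the proposal concrete, here is a hypothetical sketch of how the pieces described above could fit together. Only the names Backend, Operation, SmtUpdateBatch, SmtForestUpdateBatch, and LargeSmtForest come from the description; every signature, placeholder type alias, and error shape below is an assumption, not the interface actually proposed in the commit.

```rust
// Hypothetical sketch; placeholder aliases stand in for the real miden-crypto types.
type TreeId = u64;
type Word = [u64; 4];
type NodeIndex = u64;

/// A single modification to one tree in the forest.
pub enum Operation {
    Insert { key: Word, value: Word },
    Remove { key: Word },
}

/// A set of modifications targeting a single tree.
pub struct SmtUpdateBatch {
    pub tree_id: TreeId,
    pub operations: Vec<Operation>,
}

/// A set of modifications spanning the entire forest.
pub struct SmtForestUpdateBatch {
    pub updates: Vec<SmtUpdateBatch>,
}

/// Minimal contract for a pluggable storage backend. Backends see trees and
/// nodes rather than opaque key/value blobs, i.e. they are aware of tree semantics.
pub trait Backend {
    type Error;

    /// Fetch a node of the identified tree, if present.
    fn get_node(&self, tree: TreeId, index: NodeIndex) -> Result<Option<Word>, Self::Error>;

    /// Apply a batch of modifications to the forest atomically.
    fn apply(&mut self, batch: SmtForestUpdateBatch) -> Result<(), Self::Error>;
}

/// The user-facing forest type, generic over its backend.
pub struct LargeSmtForest<B: Backend> {
    backend: B,
}
```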
Tracking PR for Plonky3 v0.4.2
* chore: changed the layout of the sponge state
* fix fmt
* feat: update word comparison to LE convention
* fix comments
* undo changes to word order
* fix stale comments
* remap digest to be top word of state
* fix lint
* chore: expose INPUT1_RANGE and INPUT2_RANGE publicly
* update comment deterministic signature Falcon
* chore: move input range constants into the correct struct
* update comments digest position
* address feedback

---------

Co-authored-by: Bobbin Threadbare <[email protected]>
* feat: fuzz deserialization for miden-serde-utils and miden-crypto

  Fuzz targets for miden-serde-utils:
  - primitives, collections, string, vint64, goldilocks

  Fuzz targets for miden-crypto:
  - word, merkle, smt_serde

  CI runs daily via .github/workflows/fuzz.yml.

* fix: bound collection allocations during deserialization

  Validate lengths before allocating Vec, BTreeMap, BTreeSet, String. Cap at 2^32 elements to prevent OOM from malicious input.

* chore: Changelog

* feat: add BudgetedReader for centralized deserialization limits

  Per-collection length validation (validate_deserialization_length) catches one attack vector but requires touching every Deserializable impl. A complementary approach is to limit bytes consumed at the reader level. BudgetedReader wraps any ByteReader and tracks input bytes consumed against a caller-specified budget. This provides:
  - Single point of enforcement (no need to modify every impl)
  - Caller control over budget per deserialization call
  - Defense in depth alongside per-collection checks

  Add read_from_bytes_with_budget() convenience method on Deserializable. Add fuzz target exercising budgeted deserialization.

* chore: lower MAX_DESERIALIZATION_LEN to 2^24

  Reduce the per-collection element limit from u32::MAX (~4 billion) to 2^24 (16 million). This caps allocation at ~128 MB per collection (at 8 bytes per element), which is generous for legitimate use while providing tighter protection against malicious input.

* refactor: replace read_many with lazy read_many_iter

  Replace upfront-allocating read_many with a lazy iterator (read_many_iter). The iterator deserializes elements one at a time, and checks the requested count against max_alloc() before starting. BudgetedReader implements max_alloc() to derive a bound from its remaining budget. Wrap untrusted input in BudgetedReader to reject fake length prefixes and limit total consumption (see the sketch below). Inspired by Philipp's suggestion in facebook/winterfell#377.

* refactor: use size_of instead of min_serialized_size for allocation bounds

  Replace `Deserializable::min_serialized_size()` with `core::mem::size_of::<D>()` for the max_alloc check in read_many_iter. This simplifies the API by removing a trait method while maintaining security through BudgetedReader.

  Tradeoff: size_of measures in-memory size rather than wire format. For nested collections like Vec<Vec<T>>, early-abort is less precise (the outer size_of is 24 bytes), but BudgetedReader still catches overruns during actual reads.

  - Remove min_serialized_size from Deserializable trait and all impls
  - Update max_alloc to use element_size parameter naming
  - Add tests documenting size_of behavior and limitations

* refactor: add min_serialized_size() trait method for allocation bounds

  Replace direct size_of usage with Deserializable::min_serialized_size() to allow types to provide more accurate minimum serialized sizes. This improves the early-abort check in read_many_iter for collection types.

  - Add min_serialized_size() to Deserializable trait, defaulting to size_of::<Self>()
  - Override for Option<T> to return 1 (discriminator byte)
  - Override for [T; C] to return C * T::min_serialized_size()
  - Override for Vec, BTreeMap, BTreeSet, String to return 1 (minimum vint length prefix)
  - Update read_many_iter to use D::min_serialized_size()

* chore: address review feedback

  - Update miden-serde-utils/fuzz edition to 2024
  - Remove redundant [workspace] from miden-crypto-fuzz (already excluded)
  - Chain felts_from_u64 in EncryptedData deserialization to avoid allocation

* ci: enable actual fuzzing for miden-crypto-fuzz targets

  - Update CI to run fuzz targets instead of just checking compilation
  - Use --fuzz-dir flag to work around directory structure
  - Build with cargo fuzz build, then run the binary directly to avoid the cargo-fuzz wrapper SIGPIPE issue when the fuzzer exits cleanly
  - Add Makefile targets for all fuzz targets: word, merkle, smt_serde

* chore: address review feedback
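The sketch below illustrates the budgeting idea in a self-contained form. The real BudgetedReader wraps the crate's ByteReader trait; here a plain byte-slice reader stands in so the example compiles on its own, and the method names and error handling are assumptions rather than the actual miden-serde-utils API.

```rust
// Self-contained illustration of budget tracking; not the actual BudgetedReader.
struct BudgetedSliceReader<'a> {
    data: &'a [u8],
    pos: usize,
    budget: usize, // bytes this deserialization is still allowed to consume
}

impl<'a> BudgetedSliceReader<'a> {
    fn new(data: &'a [u8], budget: usize) -> Self {
        Self { data, pos: 0, budget }
    }

    /// Read exactly `n` bytes, charging them against the budget.
    fn read_bytes(&mut self, n: usize) -> Result<&'a [u8], &'static str> {
        if n > self.budget {
            return Err("deserialization budget exceeded");
        }
        let end = self.pos.checked_add(n).ok_or("length overflow")?;
        let bytes = self.data.get(self.pos..end).ok_or("unexpected end of input")?;
        self.pos = end;
        self.budget -= n;
        Ok(bytes)
    }

    /// Upper bound on how many elements of a given minimum serialized size
    /// could still fit in the remaining budget. A lazy reader in the spirit of
    /// read_many_iter can compare a claimed length prefix against this bound
    /// and abort before allocating anything.
    fn max_alloc(&self, min_element_size: usize) -> usize {
        self.budget / min_element_size.max(1)
    }
}

fn main() {
    let input = [1u8, 2, 3, 4];
    let mut reader = BudgetedSliceReader::new(&input, 8);
    assert_eq!(reader.read_bytes(4).unwrap(), &[1, 2, 3, 4]);
    // A claimed count of a million 8-byte elements cannot fit in what is left
    // of the budget, so it would be rejected up front.
    assert!(reader.max_alloc(8) < 1_000_000);
}
```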
huitseeker approved these changes on Jan 14, 2026.
This is a tracking PR for the v0.21.0 release.